    Report on Strategic Initiative to Provide Enhanced Intellectual Access to NYU-Curated Digital Collections

    This report addresses Goal no. 4 of the NYU Division of Libraries’ Strategic Plan 2013-2017, namely, “Establish processes and support structures that ensure we can select, acquire, preserve, and provide access to the full spectrum of research materials,” and specifically Initiative 4.3, “a plan to provide intellectual access to NYU-curated digital collections via the library's primary discovery-and-access interfaces.” Since the Initiative’s inception in July 2013, participants have identified and prioritized eligible collections, collected user stories, prototyped the “Ichabod” tool for metadata aggregation and normalization, mapped metadata elements to a local Nyucore schema, and harvested the processed metadata into the development instance of BobCat. The Ichabod tool is based on Fedora, Hydra, Solr, and Blacklight; it was implemented using an Agile methodology, with developers from DLTS, KADD, and Web Services. The emerging code base, processes, and working relationships place NYU in a strong position to solve local discovery problems as well as innovate in the field of repository metadata management and enrichment.
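
    As a rough illustration of the mapping-and-harvest workflow summarized above, the following Python sketch maps source metadata fields onto a local target schema and posts the normalized records to a Solr index. The crosswalk entries, field names, and Solr core name are hypothetical placeholders; the actual Ichabod tool is a Fedora/Hydra/Blacklight application, and this sketch only conveys the general shape of the step.

    ```python
    import requests

    # Hypothetical crosswalk from a source collection's fields to local target fields.
    # These field names are placeholders, not the actual Nyucore element set.
    CROSSWALK = {
        "dc:title": "title",
        "dc:creator": "creator",
        "dc:date": "date_created",
        "dc:identifier": "identifier",
    }

    # Placeholder Solr core and URL; a real deployment would point at the discovery index.
    SOLR_UPDATE_URL = "http://localhost:8983/solr/discovery/update?commit=true"


    def normalize(source_record: dict) -> dict:
        """Map a source record onto the local schema, keeping only mapped fields."""
        target = {}
        for source_field, target_field in CROSSWALK.items():
            value = source_record.get(source_field)
            if value:
                # Normalize single values to lists so repeatable fields index consistently.
                target[target_field] = value if isinstance(value, list) else [value]
        return target


    def harvest(source_records: list[dict]) -> None:
        """Normalize a batch of records and post them to the discovery index."""
        docs = [normalize(rec) for rec in source_records]
        resp = requests.post(SOLR_UPDATE_URL, json=docs, timeout=30)
        resp.raise_for_status()


    if __name__ == "__main__":
        harvest([{"dc:title": "Sample digitized photograph", "dc:creator": ["Unknown"]}])
    ```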

    Toward Semantic Metadata Aggregation for DPLA and Beyond: A Report of the ALCTS CaMMS Heads of Cataloging Interest Group, Orlando, June 2016

    DPLA content hubs and service hubs face similar challenges in aggregating metadata. These include quality assurance, reconciliation of terms, and conforming source data to the DPLA application profile. An area receiving special attention is the clarification and mapping of rights statements. In some cases, there is no information in the record and it needs to be supplied. In others, there may be notes with vague or irregular wording, and these need to be mapped to a controlled vocabulary in order to be useful in discovery systems (e.g., through faceting and filtering). Rightsstatements.org is helping to make this possible by providing unambiguous statements backed up by persistent URIs. For both the NYPL and the MDL, serving as a DPLA hub aligns with their institutional missions. By aggregating and enriching cultural heritage data from hub participants, they make their collections more discoverable on the Web and provide a valuable public service. To provide additional value, both the NYPL and the MDL hubs are considering ways to push enhanced metadata (e.g., place names enriched with geographic coordinates) back to their original repositories. Practitioners and managers of cataloging and metadata services have an important role to play in large-scale aggregation. They can ensure that when data sets from multiple sources are combined and normalized, the underlying data semantics are preserved. Knowledge of resource description standards and controlled vocabularies continues to be highly valued, but must be applied at scale. An understanding of schema crosswalks continues to be important for aligning metadata with target applications. Metadata audits and index-based faceting can expose problems, while tools like OpenRefine and Python can be used for programmatic remediation.
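
    The rights-mapping step described above can be sketched roughly as follows. The rightsstatements.org URIs are real, but the free-text note patterns mapped to them are illustrative assumptions; an actual hub would rely on curatorial review and collection-level policy rather than simple string matching.

    ```python
    import re

    # Real rightsstatements.org URIs; the note patterns mapped to them are illustrative only.
    RIGHTS_PATTERNS = [
        (re.compile(r"in copyright|all rights reserved", re.I),
         "http://rightsstatements.org/vocab/InC/1.0/"),
        (re.compile(r"no known (copyright|restrictions)|public domain", re.I),
         "http://rightsstatements.org/vocab/NoC-US/1.0/"),
        (re.compile(r"undetermined|status unknown", re.I),
         "http://rightsstatements.org/vocab/UND/1.0/"),
    ]

    # Fallback when nothing matches: Copyright Not Evaluated.
    DEFAULT_RIGHTS = "http://rightsstatements.org/vocab/CNE/1.0/"


    def map_rights(note: str | None) -> str:
        """Map a vague free-text rights note to a controlled, URI-backed statement."""
        if note:
            for pattern, uri in RIGHTS_PATTERNS:
                if pattern.search(note):
                    return uri
        return DEFAULT_RIGHTS


    if __name__ == "__main__":
        for note in ["All rights reserved.", "No known restrictions on use.", None]:
            print(note, "->", map_rights(note))
    ```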

    Broken-World Vocabularies

    There is a growing interest in vocabularies as an important part of the infrastructure of library metadata on the Semantic Web. This article proposes that the framework of "maintenance, breakdown and repair", transposed from the field of Science and Technology Studies, can help illuminate and address vulnerabilities in this emerging infrastructure. In particular, Steven Jackson's concept of "broken world thinking" can shed light on the role of "maintainers" in sustainable innovation and infrastructure. By viewing vocabularies through the lens of broken world thinking, it becomes easier to see the gaps — and to see those who see the gaps — and build maintenance functions directly into tools, workflows, and services. It is hoped that this article will expand the conversation around bibliographic best practices in the context of the Web.
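
    Purely as an illustration of what a maintenance function built into a tool might look like (the article itself does not prescribe an implementation), the following sketch checks whether the URIs in a small, hypothetical vocabulary still resolve and flags the gaps for repair.

    ```python
    import urllib.request

    # Hypothetical vocabulary: term labels mapped to the URIs meant to identify them.
    # A real maintenance job would read these from the vocabulary's published serialization.
    VOCAB_TERMS = {
        "Photographs": "http://example.org/vocab/photographs",
        "Oral histories": "http://example.org/vocab/oral-histories",
    }


    def check_term(label: str, uri: str) -> bool:
        """Return True if the term URI still resolves; report a gap otherwise."""
        try:
            with urllib.request.urlopen(uri, timeout=10) as resp:
                ok = resp.status < 400
        except OSError:
            ok = False
        if not ok:
            print(f"NEEDS REPAIR: {label} -> {uri}")
        return ok


    if __name__ == "__main__":
        broken = [label for label, uri in VOCAB_TERMS.items() if not check_term(label, uri)]
        print(f"{len(broken)} of {len(VOCAB_TERMS)} terms need attention")
    ```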

    Investigating Multilingual, Multi-script Support in Lucene/Solr Library Applications

    Over many years, Yale has developed a highly structured, high-quality multilingual catalog of bibliographic data. Almost 50% of the collection represents non-English materials in over 650 languages, and includes many different non-Roman scripts. Faculty, students, researchers, and staff would like to make full use of this original-script content for resource discovery. While the underlying textual data are in place, effective indexing, retrieval, and display functionality for the non-Roman script content is not available within our bibliographic discovery applications, Orbis and Yufind. Opportunities now exist in the Unicode and Lucene/Solr computing environment to bridge the functionality gap and achieve internationalization of the Yale Library catalog. While most parts of this study focus on the Yale environment, in the absence of other such studies it is hoped that the findings will be of interest to a much larger community. (Funder: Arcadia Foundation.)
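
    One small piece of the functionality gap described above can be sketched as follows: routing a query to a script-specific Solr field based on the dominant Unicode script of its characters. The field names and routing rules here are assumptions for illustration, not part of Orbis or Yufind.

    ```python
    import unicodedata

    # Hypothetical script-to-field routing for a Solr index that keeps separate
    # analysis chains per script (e.g., CJK bigramming vs. ICU folding for Latin).
    SCRIPT_FIELDS = {
        "CJK": "title_cjk",
        "ARABIC": "title_ara",
        "CYRILLIC": "title_cyr",
        "HEBREW": "title_heb",
        "LATIN": "title_lat",
    }


    def dominant_script(query: str) -> str:
        """Guess the dominant script of a query from Unicode character names."""
        counts: dict[str, int] = {}
        for ch in query:
            if not ch.isalpha():
                continue
            name = unicodedata.name(ch, "")
            if name.startswith(("CJK", "HIRAGANA", "KATAKANA", "HANGUL")):
                script = "CJK"
            else:
                script = name.split(" ")[0]  # e.g., "ARABIC", "CYRILLIC", "LATIN"
            counts[script] = counts.get(script, 0) + 1
        return max(counts, key=counts.get, default="LATIN")


    def route_query(query: str) -> str:
        """Build a fielded Solr query against the field matching the query's script."""
        field = SCRIPT_FIELDS.get(dominant_script(query), "title_lat")
        return f'{field}:"{query}"'


    if __name__ == "__main__":
        for q in ["漢語", "Война и мир", "al-Kitāb"]:
            print(q, "->", route_query(q))
    ```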

    Energy and the military: Convergence of security, economic, and environmental decision-making

    Energy considerations are core to the missions of armed forces worldwide. The interaction between military energy issues and non-military energy issues is not often explicitly treated in the literature or media, although issues around clean energy have increased awareness of this interaction. The military has also long taken a leadership role in research and development (R&D) and procurement of specific energy technologies. More recently, R&D leadership has moved to the energy efficiency of home-country installations and the development of renewable energy projects, in areas ranging from mini-grids for installations to alternative fuels for major weapons systems. In this paper we explore the evolving relationship between energy issues and defense planning, and show how these developments have implications for military tactics and strategy as well as for civilian energy policy.

    Graphic loans: East Asia and beyond

    The national languages of East Asia (Chinese, Japanese, Korean and Vietnamese) have made extensive use of a type of linguistic borrowing sometimes referred to as a 'graphic loan'. Such loans have no place in the conventional classification of loans based on Haugen (1950) or Weinreich (1953), and research on loan word theory and phonology generally overlooks them. The classic East Asian phenomenon is discussed and a framework is proposed to describe its mechanism. It is argued that graphic loans are more than just 'spelling pronunciations', because they are a systematic and widespread process, independent of but not inferior to phonological borrowing. The framework is then expanded to cover a range of other cases of borrowing between languages to show that graphic loans are not a uniquely East Asian phenomenon, and therefore need to be considered as a major category of loan.
